Nate B. Jones published a piece last week arguing that the single largest vulnerability in AI safety is not technical. It is communicative. The gap between what you tell an AI system to do and what you actually mean is where misalignment lives.
He is right. And he is describing half the problem.
Jones frames this as an individual skill: intent engineering. Learn to specify your constraints, your escalation triggers, your value hierarchy before you hand a task to an agent. Three questions that change how agents behave. A discipline that should be taught the way software engineering is taught.
All correct. But here is what I keep seeing in companies navigating AI transitions: even when individuals close their personal intent gap perfectly, the organisation still has one. And the organisational version is where the real damage compounds.
The gap nobody designed
Every company adopting AI has built a capability layer. Tools chosen, systems configured, prompts written, teams trained. That is the part the tutorials cover.
Almost none of them have built the governance layer: the structural architecture that connects what AI produces to how the business actually makes decisions.
When Jones talks about an AI agent that rewrites half a codebase when you asked it to refactor a module, that is an individual intent gap. One person, one prompt, one misalignment. It costs you a bad afternoon.
When five department heads each adopt AI tools independently, each making reasonable local decisions, and nobody designs how those tools, the decisions they inform, and the workflows they change fit together as a system, that is the organisational version. Same failure mode. Scaled to the org chart. And it costs you months of accumulated incoherence that nobody notices until it is expensive to fix.
This is what I call the governance gap. It is the organisational intent gap, and no individual's prompting skill can close it.
The structural questions nobody is asking
Jones identifies three questions that change the quality of human-agent interaction: What would I not want the agent to do, even if it accomplished the goal? Under what circumstances should the agent stop and ask? If the goal and a constraint conflict, which wins?
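To make the individual discipline concrete, here is a minimal sketch of what answering those three questions up front might look like: a delegation brief you fill in before the prompt, rendered as explicit instructions for the agent. The structure and names (DelegationBrief, to_system_prompt) are illustrative assumptions on my part, not something Jones or any particular agent framework prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class DelegationBrief:
    """Hypothetical sketch: make intent explicit before handing a task to an agent."""
    goal: str
    # Question 1: what the agent must not do, even if it would accomplish the goal
    hard_constraints: list[str] = field(default_factory=list)
    # Question 2: conditions under which the agent should stop and ask a human
    escalation_triggers: list[str] = field(default_factory=list)
    # Question 3: if the goal and a constraint conflict, which wins
    constraints_override_goal: bool = True

    def to_system_prompt(self) -> str:
        """Render the brief as explicit instructions to prepend to the task."""
        lines = [f"Goal: {self.goal}",
                 "Hard constraints (never violate, even to achieve the goal):"]
        lines += [f"- {c}" for c in self.hard_constraints]
        lines.append("Stop and ask a human if any of the following occur:")
        lines += [f"- {t}" for t in self.escalation_triggers]
        lines.append(
            "If the goal conflicts with a constraint, "
            + ("the constraint wins." if self.constraints_override_goal else "stop and ask before proceeding.")
        )
        return "\n".join(lines)


brief = DelegationBrief(
    goal="Refactor the payments module for readability",
    hard_constraints=["Do not change public interfaces",
                      "Do not touch code outside the payments module"],
    escalation_triggers=["A test starts failing",
                         "A change would exceed 200 lines"],
)
print(brief.to_system_prompt())
```

The point is not the code. It is that the constraints, the escalation triggers, and the precedence rule exist in writing before the task is delegated, which is exactly what most organisations never manage to do at the structural level.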
These are the right questions at the individual level. But organisations need to answer them at a structural level too. And the structural versions are harder, because they expose problems nobody wants to own.
Who has explicit authority over AI-affected decisions in your organisation right now? Not in the policy document. In practice. In most companies I work with, AI tools shape hiring recommendations, budget forecasts, product priorities, and customer strategies. But nobody deliberately decided who governs how AI-generated recommendations become business decisions. Engineers pick tools. Managers adopt outputs. Executives assume someone is governing it. Nobody is.
Where is your organisational understanding building up? AI platforms are racing to become the synthesis layer that connects information across all your tools. Whoever controls that synthesis layer controls everything built on top of it. In most companies, this is happening by accident. Three departments building the company's memory in three different tools, in three different formats, with nobody translating.
When your AI tools disagree with your governance structure, which wins? Without an explicit answer, the tools win by default. Because the tools produce output constantly, and governance only activates when someone remembers to check.
The failure mode that should worry you most
Jones identifies what worries him most about AI risk: not a dramatic failure, but "a gradual erosion of human agency through millions of small misalignments that individually seem manageable and collectively aren't."
This is the one that keeps me up at night too. Specifically the organisational version, because it is invisible.
A dramatic AI failure triggers accountability. An agent sends unauthorised emails. Fabricates data. Acquires permissions it should not have. Someone notices. Headlines get written. Procurement teams update their criteria.
But organisational drift does not trigger anything. Stale context accumulating across multiple systems. Decisions made in one department that contradict decisions from another. Institutional knowledge building up inside vendor platforms nobody deliberately chose. Each of these is individually manageable. Collectively, they are a slow-moving structural failure. And no individual's intent engineering skill can prevent it, because the failure is not happening at the individual level. It is happening in the spaces between individuals, in the organisational layer that nobody designed.
Structure holds when stated values do not
Jones makes an observation that maps directly to what I see in companies: you cannot train scheming out of AI models. Anti-scheming training may just teach better-hidden scheming.
The organisational parallel is identical. You cannot train governance into a culture. You have to build it into a structure.
I see companies try the cultural approach constantly. "We have AI guidelines." "Our teams know the boundaries." "We trust our engineers to use good judgement." This is governance by assurance. It relies on people remembering the rules, interpreting them consistently, and applying them under pressure.
It degrades under pressure every time. When the quarterly deadline hits, when the board wants results, when a competitor ships first, stated values are the first thing that bends.
Jones himself points out that Anthropic, the company founded specifically because its CEO thought OpenAI was moving too fast, abandoned its core unilateral safety pledge under competitive pressure. Stated commitments buckled. What held were structural red lines: no AI-controlled weapons, no mass domestic surveillance. Those held even when the Pentagon threatened wartime production laws.
The structural constraints survived what the stated values could not. The same principle applies at your company. Just at a smaller scale.
Look up from the prompt
If you are reading Jones's piece and thinking about how to close the intent gap, good. Start there. His three questions are the right starting point for any individual delegation.
But then look up from the prompt and ask the structural questions.
If you asked five directors how AI output becomes a business decision in your organisation, would you get the same answer?
Who has explicit authority over how AI-generated recommendations become actions?
If you cancelled your primary AI platform tomorrow, what organisational knowledge would you lose that exists nowhere else?
Has anyone deliberately chosen where your organisation's AI-driven understanding accumulates, or did it just happen?
If you cannot answer these clearly, your organisation has a governance gap. And no amount of individual intent engineering will close it. Because the gap is not between one person and one agent. It is between your company and every AI system it touches.
The intent gap is real. Jones is right about that. But the biggest version of it does not live in your prompts. It lives in your org chart.
Mbali Chaise is the founder of HSTM OU, a structural advisory practice helping companies build the governance layer for AI adoption. She works with CTOs, engineering leads, and operations directors at organisations navigating AI transitions that have outpaced the structures meant to govern them.
Book a scoping conversation